In 2006, techniques for learning in so-called deep neural networks were discovered. These techniques are now known as deep learning. They have been developed further, and today deep neural networks and deep learning achieve outstanding performance on many important problems in computer vision, speech recognition, and natural language processing.
“In a neural network we don't tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.”
Classification: “Classification" indicates that the data have discrete class labels. Classification predictive modeling is the task of approximating a mapping function (f) from input variables (X) to discrete output variables (y), or classes. The output variables are often called labels or categories. The mapping function predicts the class or category for a given observation.
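To make the idea concrete, here is a toy sketch of a classifier: a function that maps an input variable to a discrete label. The rule and the threshold are hypothetical stand-ins for a learned mapping f.

```python
# Toy illustration of classification: an input is mapped to one of a
# fixed set of discrete labels. The hand-written rule below stands in
# for a learned mapping f: X -> y (hypothetical threshold).
def classify(length):
    return 'short' if length < 5.0 else 'long'

print(classify(3.2))  # a discrete class label, not a number
print(classify(6.1))
```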
Regression: Regression predictive modeling is the task of approximating a mapping function (f) from input variables (X) to a continuous output variable (y). A continuous output variable is a real value, such as an integer or floating-point value. These are often quantities, such as amounts and sizes. For example, a house may be predicted to sell for a specific dollar value, perhaps in the range of $100,000 to $200,000.
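By contrast, a regression model outputs a continuous value. A minimal sketch, fitting a line to made-up house-price data with NumPy (the numbers are illustrative, not real prices):

```python
# Toy illustration of regression: approximate a mapping f from an input
# variable to a continuous output by least-squares line fitting.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])              # e.g. house size
y = np.array([150.0, 180.0, 210.0, 240.0])      # e.g. price in $1000s (made up)

w, b = np.polyfit(x, y, 1)   # slope and intercept of the best-fit line
print(w * 5.0 + b)           # a continuous prediction for x = 5
```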
The Iris flower data set, or Fisher's Iris data set, is a multivariate data set introduced by the British statistician, eugenicist, and biologist Ronald Fisher in his 1936 paper "The use of multiple measurements in taxonomic problems" as an example of linear discriminant analysis. It is sometimes called Anderson's Iris data set, because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species. Two of the three species were collected in the Gaspé Peninsula, "all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus". Fisher's paper was published in the Annals of Eugenics, which has led to controversy about the continued use of the Iris data set for teaching statistical techniques today.
The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other.
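One quick way to inspect the data set described above is through scikit-learn, which ships a copy of Fisher's Iris data (this assumes scikit-learn is installed; the tutorial below loads the same data from a CSV file instead):

```python
# Load the bundled Iris data and confirm its shape: 150 samples
# (50 per species) with 4 features each.
from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)    # (150, 4)
print(iris.target_names)  # the three species
```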
import pandas as pd

# Load the Iris data: the first four columns are the features,
# the fifth column is the species name
dataset = pd.read_csv('iris.csv').values
data = dataset[:, 0:4].astype(float)
target = dataset[:, 4]
from keras.models import Sequential
from keras.layers import Dense

# Create an empty network, then stack fully connected layers
model = Sequential()
model.add(Dense(64, input_dim=4, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(3, activation='softmax'))  # one output per species
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.summary()
# Encode the string labels as integers:
# setosa -> 0, versicolor -> 1, virginica -> 2
new_target = []
for i in target:
    if i == 'setosa':
        new_target.append(0)
    elif i == 'versicolor':
        new_target.append(1)
    else:
        new_target.append(2)

# One-hot encode the integer labels, as required by categorical cross-entropy
from keras.utils import to_categorical
new_target = to_categorical(new_target)
from sklearn.model_selection import train_test_split

# Hold out 10% of the samples for testing
train_data, test_data, train_target, test_target = train_test_split(data, new_target, test_size=0.1)
history = model.fit(train_data, train_target, epochs=100)
from matplotlib import pyplot as plt

# Plot the training loss and accuracy recorded during fitting
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['accuracy'], label='accuracy')
plt.legend()
plt.show()
# Predict class probabilities for the held-out samples
predicted_target = model.predict(test_data)
print('Actual results:', test_target)
print('Predicted results:', predicted_target)

# argmax converts one-hot rows and probability rows back to class indices
import numpy as np
print('Actual results:', np.argmax(test_target, axis=1))
print('Predicted results:', np.argmax(predicted_target, axis=1))
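A common follow-up is to turn that comparison into a single test-accuracy number. A self-contained sketch, with small hypothetical arrays standing in for the `test_target` and `predicted_target` values produced above:

```python
# Compare one-hot actuals with softmax predictions: argmax over each row
# recovers class indices, and the mean of the matches is the accuracy.
import numpy as np

test_target = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # one-hot (hypothetical)
predicted_target = np.array([[0.8, 0.1, 0.1],
                             [0.2, 0.5, 0.3],
                             [0.1, 0.7, 0.2]])              # softmax (hypothetical)

actual = np.argmax(test_target, axis=1)          # class indices
predicted = np.argmax(predicted_target, axis=1)  # predicted indices
print('Test accuracy:', np.mean(actual == predicted))  # 2 of 3 correct here
```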